This is an R Markdown file used for the “Introduction to Open Data Science” course at the University of Helsinki in 2023. The following lines are used to practice the R Markdown syntax.
Here is a link to my GitHub repository: https://github.com/ntriches/IODS-project
# This is a so-called "R chunk" where you can write R code.
# I don't have any code yet, so I will leave this the way it is.
date()
## [1] "Mon Dec 11 16:37:13 2023"
Below, I describe the work and results of the first week, a.k.a. the “warm-up phase”, of the IODS 2023 project.
… from “R for Health Data Science”, chapters 1–4, and exercise set 1.
This file describes the work and results of the second week, a.k.a. “Regression and model validation”, of the IODS2023 course.
The data set is based on a survey conducted in Finland between 3.12.2014 and 10.1.2015. The aim of the survey was to examine the relationship between learning approaches and students’ achievements in an introductory statistics course in Finland (click here to see more information on the data set and the course). The data set has 166 observations (rows) and 7 variables (columns), of which four are numerical, two are integer and one is a categorical character variable (see output below for details). Note that only a few of the originally recorded variables are included in this data set.
# message = FALSE will not show any message from R
# Read and summarise file
# load saved file from data wrangling exercise
learning2014 <- read.table(file='/home/ntriches/github_iods2023/IODS23/data/learning2014.csv', header=TRUE, sep = ",")
str(learning2014) # structure of data
## 'data.frame': 166 obs. of 7 variables:
## $ gender : chr "F" "M" "F" "M" ...
## $ age : int 53 55 49 53 49 38 50 37 37 42 ...
## $ attitude: num 3.7 3.1 2.5 3.5 3.7 3.8 3.5 2.9 3.8 2.1 ...
## $ deep : num 3.58 2.92 3.5 3.5 3.67 ...
## $ stra : num 3.38 2.75 3.62 3.12 3.62 ...
## $ surf : num 2.58 3.17 2.25 2.25 2.83 ...
## $ points : int 25 12 24 10 22 21 21 31 24 26 ...
dim(learning2014) # dimension of data
## [1] 166 7
The following figure shows a plot matrix of the 7 variables in the data set, where each variable is plotted against every other variable. Pink shows data from female students and blue shows data from male students, so the scatter plots, distributions and correlations are split by gender. The distributions show that the majority of the students were very young (< 25 years old), leading to a right-skewed distribution of age. All other variables are fairly normally distributed, with some tendency towards a left-skewed distribution for points and deep, and for the male attitude values; this is also visible in the outliers of the box plots in the top row of the matrix.
We can see that a better attitude towards the course, represented by a higher number, seems to lead to significantly higher points, in other words, to a better result in the exam (p < 0.001). It also appears that older male students might get fewer points in the exam (p < 0.1). The strategic learning approach (stra) might lead to better exam results, shown by a positive correlation (p < 0.1). On the other hand, we might assume that deep learning (deep) and surface learning (surf) are not successful learning approaches, as they show negative correlations with points (p < 0.1).
# fig.width and fig.height enlarge the figure below
# Graphical overview
# load libraries
library(dplyr)   # needed for the pipe operator %>%
library(ggplot2)
library(GGally)
# create plot matrix
overview_plot <- learning2014 %>%
ggpairs(mapping = aes(col = gender, alpha = 0.3),
lower = list(combo = wrap("facethist", bins = 20))) +
theme_grey(base_size = 20)
overview_plot
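To check one of the correlations reported by ggpairs numerically, here is a quick sketch using base R’s cor.test on the strongest pair (attitude vs. points):

# Pearson correlation between attitude and exam points, with a p-value
cor.test(learning2014$attitude, learning2014$points)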
For the multivariable regression model, I selected the three explanatory (predictor) variables age, attitude, and strategic learning (stra). The dependent target variable is points. For my model, I set the significance level at p = 0.05.
The model summary shown below first restates how the model was fitted (the function call), then gives a summary of the distribution of the residuals, the results (“Coefficients”), and indicators of overall model quality (“Residual standard error”, “R-squared”, “F-statistic”). In the table of model coefficients, we can see each estimated coefficient with its estimated standard error and the corresponding t-statistic and p-value. In the last part of the output, we see the residual standard error with the corresponding degrees of freedom (166 observations − 4 estimated parameters = 162). The multiple R-squared tells us how much of the variance of points is explained by the model; the adjusted R-squared additionally takes the number of variables in the model into account. The final line shows the result of the F-test of the hypothesis that all coefficients except the intercept are equal to zero.
In my model, attitude shows a positive relationship with points (p < 0.001). In other words, a positive attitude towards the course led to higher exam results: with an increase of 1 in attitude, the exam points are estimated to increase by 3.48. On the other hand, age and stra did not show a significant relationship with the exam results (p > 0.05): there are trends suggesting that exam points decrease very slightly with age and increase with the strategic learning approach, but these are not significant at my chosen significance level. Overall, the explanatory variables in my model explain only 21.8% of the variation in the data, as shown by the multiple R-squared of 0.2182. The fitted model has a residual standard error of 5.26 points.
If I remove age and stra from my model, attitude remains highly significant (p < 0.001) but the multiple R-squared decreases slightly to 19.1%. If I run the model with attitude plus stra or age separately, the multiple R-squared is 20.5% and 20.1%, respectively, but none of the explanatory variables other than attitude have a p-value below 0.05. Overall, the first model with three explanatory variables seemed to explain the data best, and it shows that attitude has a significant influence on the exam points.
# Regression model
# create a regression model with three explanatory variables (stra, age, attitude)
lm_model_points_stra_age_attitude <- lm(points ~ stra + age + attitude, data = learning2014)
# show summary of the fitted model
summary(lm_model_points_stra_age_attitude)
##
## Call:
## lm(formula = points ~ stra + age + attitude, data = learning2014)
##
## Residuals:
## Min 1Q Median 3Q Max
## -18.1149 -3.2003 0.3303 3.4129 10.7599
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 10.89543 2.64834 4.114 6.17e-05 ***
## stra 1.00371 0.53434 1.878 0.0621 .
## age -0.08822 0.05302 -1.664 0.0981 .
## attitude 3.48077 0.56220 6.191 4.72e-09 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.26 on 162 degrees of freedom
## Multiple R-squared: 0.2182, Adjusted R-squared: 0.2037
## F-statistic: 15.07 on 3 and 162 DF, p-value: 1.07e-08
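To make the attitude coefficient more concrete, here is a small prediction sketch using two hypothetical students that differ only in attitude; the difference between the two predicted values equals the attitude coefficient, about 3.48 points:

# predicted exam points for two hypothetical, otherwise identical students
new_students <- data.frame(stra = 3, age = 25, attitude = c(3, 4))
predict(lm_model_points_stra_age_attitude, newdata = new_students)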
# Does model (R-squared) improve with less explanatory variables?
# create a simple linear regression with attitude
lm_model_points_attitude <- lm(points ~ attitude, data = learning2014)
summary(lm_model_points_attitude) # no
##
## Call:
## lm(formula = points ~ attitude, data = learning2014)
##
## Residuals:
## Min 1Q Median 3Q Max
## -16.9763 -3.2119 0.4339 4.1534 10.6645
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 11.6372 1.8303 6.358 1.95e-09 ***
## attitude 3.5255 0.5674 6.214 4.12e-09 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.32 on 164 degrees of freedom
## Multiple R-squared: 0.1906, Adjusted R-squared: 0.1856
## F-statistic: 38.61 on 1 and 164 DF, p-value: 4.119e-09
# create a regression model with two explanatory variables (stra, attitude)
lm_model_points_stra_attitude <- lm(points ~ stra + attitude, data = learning2014)
summary(lm_model_points_stra_attitude)
##
## Call:
## lm(formula = points ~ stra + attitude, data = learning2014)
##
## Residuals:
## Min 1Q Median 3Q Max
## -17.6436 -3.3113 0.5575 3.7928 10.9295
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 8.9729 2.3959 3.745 0.00025 ***
## stra 0.9137 0.5345 1.709 0.08927 .
## attitude 3.4658 0.5652 6.132 6.31e-09 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.289 on 163 degrees of freedom
## Multiple R-squared: 0.2048, Adjusted R-squared: 0.1951
## F-statistic: 20.99 on 2 and 163 DF, p-value: 7.734e-09
# create a regression model with two explanatory variables (age, attitude)
lm_model_points_age_attitude <- lm(points ~ age + attitude, data = learning2014)
summary(lm_model_points_age_attitude)
##
## Call:
## lm(formula = points ~ age + attitude, data = learning2014)
##
## Residuals:
## Min 1Q Median 3Q Max
## -17.3354 -3.3095 0.2625 4.0005 10.4911
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 13.57244 2.24943 6.034 1.04e-08 ***
## age -0.07813 0.05315 -1.470 0.144
## attitude 3.54392 0.56553 6.267 3.17e-09 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.301 on 163 degrees of freedom
## Multiple R-squared: 0.2011, Adjusted R-squared: 0.1913
## F-statistic: 20.52 on 2 and 163 DF, p-value: 1.125e-08
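As an additional check (not part of the outputs above), nested models can also be compared formally with an F-test; a sketch using the attitude-only model and the full three-variable model:

# does adding stra and age improve significantly on attitude alone?
anova(lm_model_points_attitude, lm_model_points_stra_age_attitude)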
The assumptions of linear regression are as follows:

1. Linearity, i.e., there is a linear relationship between the explanatory variables and the target variable.
2. Homoscedasticity, i.e., the variance of the errors is constant across the range of the explanatory variables.
3. Normality of the error terms, i.e., the error terms follow a normal distribution with mean zero.
Because the explanatory variables are continuous, we assess these assumptions by examining the residuals, i.e., the differences between the observed and the fitted values. In the top left plot of the following figure (“Residuals vs Fitted”), the residuals scatter fairly evenly around the horizontal line at y = 0 without a clear pattern, so the linearity assumption holds. The spread around that line also does not change much across the fitted values, so the assumption of homoscedasticity is fulfilled, too. The normality of the error terms can be assessed in the top right plot (“Normal Q-Q”): there are no major deviations from the reference line, so this assumption is also fulfilled. In the bottom left plot (“Residuals vs Leverage”), we can see how much influence each observation had on the fitted regression model through its explanatory variables.
# Plot Residuals vs Fitted values, Normal QQ-plot, and Residuals vs Leverage
par(mfrow = c(2,2))
plot(lm_model_points_stra_age_attitude,
which = c(1,2,5))
par(mfrow = c(1,1))
This file describes the work and results of the third week, a.k.a. “Logistic regression”, of the IODS2023 course.
The original data set contains student performance data for the subjects “Mathematics” and “Portuguese language” in secondary education (high school) at two Portuguese schools. It was collected through school reports and questionnaires, and gives information on the demographic and social background of the students as well as their grades and data on the school they attend (click here to see more information on the data set and access the data). One of the collected variables concerns the alcohol consumption of the students.
The data set we use has 370 observations (rows) and 35 variables (columns); most are numerical integers or binary characters, but there are also some nominal characters with more than two levels. It combines the data from the two student performance data sets, where ‘alc_use’ gives the average of the workday and weekend alcohol consumption (each rated from 1 = very low to 5 = very high) and the logical variable ‘high_use’ (TRUE/FALSE) indicates whether a student consumes more than a moderate amount of alcohol (alc_use > 2).
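For reference, a minimal sketch of how these two variables could have been created in the data wrangling exercise, assuming the joined data frame (here called alc) still contains the original workday (Dalc) and weekend (Walc) consumption columns:

library(dplyr)
# average of workday and weekend alcohol consumption, plus a flag for more-than-moderate use
alc <- alc %>%
  mutate(alc_use = (Dalc + Walc) / 2,
         high_use = alc_use > 2)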
# Read and summarise file
# load saved file from data wrangling exercise
alc <- read.table(file='/home/ntriches/github_iods2023/IODS23/data/alc.csv', header=TRUE, sep = ",")
colnames(alc) # names of variables
## [1] "school" "sex" "age" "address" "famsize"
## [6] "Pstatus" "Medu" "Fedu" "Mjob" "Fjob"
## [11] "reason" "guardian" "traveltime" "studytime" "schoolsup"
## [16] "famsup" "activities" "nursery" "higher" "internet"
## [21] "romantic" "famrel" "freetime" "goout" "Dalc"
## [26] "Walc" "health" "failures" "paid" "absences"
## [31] "G1" "G2" "G3" "alc_use" "high_use"
dim(alc) # dimension of data
## [1] 370 35
The aim of my analysis is to study the relationships between high / low alcohol consumption and the gender of the students (“sex”, female / male), how often they go out with friends (“goout”, 1 - very low to 5 - very high), how often they miss school (“absences”, number of school absences), and how well they performed in their final grades (“G3”, 0 to 20). My personal hypotheses are as follows:
The numerical exploration of our data shows that most students fall into the low alcohol consumption category. Females and males are almost equally represented (195 and 175, respectively). For goout and absences, the group medians cluster at 3 and 4, so the differences between the groups become visible mainly in the means.
library(dplyr) # load needed library
# summary statistics by group
# differences between female / male and high / low alcohol consumption, mean of going out, mean of absences, mean of final grade
alc %>%
group_by(sex, high_use) %>%
summarise(count = n(),
mean_goout = mean(goout),
median_goout = median(goout),
mean_absences = mean(absences),
median_absences = median(absences),
mean_grade = mean(G3),
median_grade = median(G3)
)
## # A tibble: 4 × 9
## # Groups: sex [2]
## sex high_use count mean_goout median_goout mean_absences median_absences
## <chr> <lgl> <int> <dbl> <dbl> <dbl> <dbl>
## 1 F FALSE 154 2.95 3 4.25 3
## 2 F TRUE 41 3.39 4 6.85 3
## 3 M FALSE 105 2.70 3 2.91 3
## 4 M TRUE 70 3.93 4 6.1 4
## # ℹ 2 more variables: mean_grade <dbl>, median_grade <dbl>
The clustering of the medians is also evident in the boxplots below. For goout, the upper quartile equals the median for males with high_use = FALSE and for females with high_use = TRUE. absences shows a relatively high number of outliers but generally increases with high_use = TRUE. For the final grades (G3), clear differences between females and males are immediately visible.
library(dplyr)
library(ggplot2) # load needed libraries
# install.packages("patchwork") to show boxplots next to each other
library(patchwork)
# box plot showing differences between sex concerning high consumption of alcohol and going out
plot_alc_use_goout_by_sex <- alc %>%
ggplot(aes(x = high_use, y = goout, col = sex)) +
geom_boxplot()
# box plot showing differences between sex concerning high consumption of alcohol and absences in school
plot_alc_use_absences_by_sex <- alc %>%
ggplot(aes(x = high_use, y = absences, col = sex)) +
geom_boxplot()
# box plot showing differences between sex concerning high consumption of alcohol and final grades
plot_alc_use_grades_by_sex <- alc %>%
ggplot(aes(x = high_use, y = G3, col = sex)) +
geom_boxplot()
# show and combine all plots
plot_alc_use_goout_by_sex + plot_alc_use_absences_by_sex + plot_alc_use_grades_by_sex + plot_layout(guides = 'collect')
Concerning my hypotheses, this means the following:
We use a logistic regression to statistically explore the relationship between the binary high / low alcohol consumption variable as the target variable, and the explanatory variables gender (sex), how often students go out (goout), their absences at school (absences), and their final grades (G3).
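In formula form, the model fitted below estimates the log-odds of high consumption as a linear combination of the explanatory variables:

\[\log \frac{P(high\_use = TRUE)}{1 - P(high\_use = TRUE)} = \beta_0 + \beta_1 \, sexM + \beta_2 \, goout + \beta_3 \, absences + \beta_4 \, G3\]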
# find the model with glm()
model_high_use_sex_goout_absences_grades <- glm(high_use ~ sex + goout + absences + G3, data = alc, family = "binomial")
# print out a summary of the model
summary(model_high_use_sex_goout_absences_grades) # G3 (grades) not statistically significant
##
## Call:
## glm(formula = high_use ~ sex + goout + absences + G3, family = "binomial",
## data = alc)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -1.9500 -0.7990 -0.5263 0.8110 2.4703
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -3.58037 0.70357 -5.089 3.60e-07 ***
## sexM 1.01859 0.25986 3.920 8.87e-05 ***
## goout 0.70551 0.12167 5.799 6.68e-09 ***
## absences 0.08263 0.02267 3.645 0.000268 ***
## G3 -0.04508 0.03985 -1.131 0.258029
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 452.04 on 369 degrees of freedom
## Residual deviance: 372.82 on 365 degrees of freedom
## AIC: 382.82
##
## Number of Fisher Scoring iterations: 4
We can see that absences, goout, and sexM show highly significant positive associations with high alcohol consumption (p < 0.001), whereas the final grades (G3) show no significant relationship with alcohol consumption. The positive coefficient for sexM indicates that male students are more likely to drink a lot of alcohol than female students. In other words, male students, students who miss many classes, and students who go out more often are more likely to be high consumers of alcohol.
# compute odds ratios (OR)
OR <- coef(model_high_use_sex_goout_absences_grades) %>%
exp
# compute confidence intervals (CI)
CI <- confint(model_high_use_sex_goout_absences_grades) %>%
exp
# print out the odds ratios with their confidence intervals
cbind(OR, CI)
## OR 2.5 % 97.5 %
## (Intercept) 0.02786536 0.006654554 0.1057209
## sexM 2.76927745 1.674593890 4.6488214
## goout 2.02488717 1.604625853 2.5882364
## absences 1.08614409 1.040269147 1.1384190
## G3 0.95592337 0.883818334 1.0338017
The odds ratios show that the odds of high alcohol consumption for male students are 2.8 times those for female students, with a 95 % confidence interval ranging from 1.7 to 4.6. Similarly, each one-point increase in goout roughly doubles the odds of high alcohol consumption (OR = 2.0, 95 % CI 1.6–2.6).
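As a quick cross-check, each odds ratio is simply the exponential of the corresponding coefficient from the model summary above, e.g. for sexM:

# exp of the sexM coefficient reproduces its odds ratio (about 2.77)
exp(coef(model_high_use_sex_goout_absences_grades)["sexM"])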
With our logistic regression model, it is possible to make predictions and explore how well the model actually predicts the target variable, high alcohol consumption. We can do this by producing a 2x2 cross tabulation of predictions versus the actual values, and a graphical visualisation of predictions and actual values (see below). To simplify the model, we remove the explanatory variable that did not have a significant relationship with alcohol consumption (G3).
# run new model without grades (G3) because they had no significant influence on high_use
model_high_use_sex_goout_absences <- glm(high_use ~ sex + goout + absences, data = alc, family = "binomial")
# print out summary of model
summary(model_high_use_sex_goout_absences)
##
## Call:
## glm(formula = high_use ~ sex + goout + absences, family = "binomial",
## data = alc)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -1.8060 -0.8090 -0.5248 0.8214 2.4806
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -4.18142 0.48085 -8.696 < 2e-16 ***
## sexM 1.02223 0.25946 3.940 8.15e-05 ***
## goout 0.72793 0.12057 6.038 1.56e-09 ***
## absences 0.08478 0.02266 3.741 0.000183 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 452.04 on 369 degrees of freedom
## Residual deviance: 374.10 on 366 degrees of freedom
## AIC: 382.1
##
## Number of Fisher Scoring iterations: 4
# predict() the probability of high_use and add to alc data frame
alc <- mutate(alc, probability = predict(model_high_use_sex_goout_absences, type = "response"))
# use the probabilities to make a prediction of high_use
alc <- mutate(alc, prediction = probability > 0.5)
# tabulate the target variable versus the predictions: 2x2 cross tabulation
table(high_use = alc$high_use, prediction = alc$prediction) %>%
addmargins()
## prediction
## high_use FALSE TRUE Sum
## FALSE 242 17 259
## TRUE 61 50 111
## Sum 303 67 370
# tabulate the target variable versus the predictions in percentages
table(high_use = alc$high_use, prediction = alc$prediction) %>%
prop.table() %>%
addmargins()
## prediction
## high_use FALSE TRUE Sum
## FALSE 0.65405405 0.04594595 0.70000000
## TRUE 0.16486486 0.13513514 0.30000000
## Sum 0.81891892 0.18108108 1.00000000
# plot 'high_use' versus 'probability'
plot_probability_high_use_prediction <- alc %>%
ggplot(aes(x = probability, y = high_use, col = prediction)) +
geom_point()
plot_probability_high_use_prediction
We can see that our model predicts the target variable reasonably well, classifying about 79% of all observations correctly. For students not in the high-consumption category (high_use = FALSE), the model predicted this correctly for 242 out of 259 students (65% of all observations). For students classified as high consumers (high_use = TRUE), it predicted this correctly for only 50 out of 111 students (13.5% of all observations), so the model is clearly better at identifying low consumers than high consumers. This is also visible in the figure: in the upper row, showing students with high alcohol consumption (TRUE), the correctly predicted observations are the ones on the right side of the plot (probability > 0.5, prediction = TRUE, in blue). In the lower row, showing students with low alcohol consumption (FALSE), most observations lie on the left side (probability < 0.5, prediction = FALSE, in red).
It is also possible to compute the total proportion of inaccurately classified individuals (the training error). From the cross tabulation above, the model misclassifies (17 + 61) / 370 ≈ 21% of the observations. The calls below show the error of the two trivial strategies of always predicting FALSE (30%, the share of high users) and always predicting TRUE (70%, the share of non-high users); the model clearly beats guessing.
# define a loss function (mean prediction error)
loss_func <- function(class, prob) {
n_wrong <- abs(class - prob) > 0.5
mean(n_wrong)
}
# error when always predicting FALSE (prob = 0): equals the share of high_use = TRUE students
loss_func(class = alc$high_use, prob = 0)
## [1] 0.3
# error when always predicting TRUE (prob = 1): equals the share of high_use = FALSE students
loss_func(class = alc$high_use, prob = 1)
## [1] 0.7
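For comparison, the same loss function can be applied to the model’s predicted probabilities (the probability column added above) to compute the actual training error, which should be close to the roughly 21% misclassification rate seen in the cross tabulation:

# training error: proportion of wrong predictions using the model's predicted probabilities
loss_func(class = alc$high_use, prob = alc$probability)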
Cross-validation is a method we can use to get a realistic estimate of the model’s actual predictive power on new data. We can also use it to compare different models. As we can see below, my model has an error of about 0.21 in a 10-fold cross-validation. This is lower than the error of the model in the exercise set (0.26).
library(boot)
# 10-fold cross-validation
ten_fold_cross_validation <- cv.glm(data = alc, cost = loss_func, glmfit = model_high_use_sex_goout_absences, K = 10)
# average number of wrong predictions in the cross validation
ten_fold_cross_validation$delta[1]
## [1] 0.2135135
This file describes the work and results of the fourth week, a.k.a. “Clustering and classification”, of the IODS2023 course.
The Boston data set from the MASS package in R consists of information on different characteristics of suburbs in Boston, Massachusetts, US. Variables include, amongst others:
The data set contains 506 rows (observations) and 14 columns (variables), all of which are numerical and none are characters. chas is a binary integer and rad an integer count. More information on the data set, its variables and the abbreviations can be found here.
# access all packages needed in this chunk
library(MASS)
library(dplyr)
library(tidyverse)
# load the data
data("Boston")
# explore the dataset
str(Boston)
## 'data.frame': 506 obs. of 14 variables:
## $ crim : num 0.00632 0.02731 0.02729 0.03237 0.06905 ...
## $ zn : num 18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
## $ indus : num 2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
## $ chas : int 0 0 0 0 0 0 0 0 0 0 ...
## $ nox : num 0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
## $ rm : num 6.58 6.42 7.18 7 7.15 ...
## $ age : num 65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
## $ dis : num 4.09 4.97 4.97 6.06 6.06 ...
## $ rad : int 1 2 2 3 3 3 5 5 5 5 ...
## $ tax : num 296 242 242 222 222 222 311 311 311 311 ...
## $ ptratio: num 15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
## $ black : num 397 397 393 395 397 ...
## $ lstat : num 4.98 9.14 4.03 2.94 5.33 ...
## $ medv : num 24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...
dim(Boston)
## [1] 506 14
The black-and-white plot matrix below shows that several variables cluster into two groups, e.g., chas (binary), rad, and tax. crim and zn show that many observations are 0 or close to 0 (i.e., practically no crime and no residential land zoned for large lots, respectively). Most of the variables are not normally distributed: the proportion of owner-occupied units built prior to 1940 (age) is skewed towards high values, as is black (a transformed measure of the proportion of Black residents in the suburbs).
# access all packages needed in this chunk
library(dplyr)
library(tidyverse)
# show summaries of variables
summary(Boston)
## crim zn indus chas
## Min. : 0.00632 Min. : 0.00 Min. : 0.46 Min. :0.00000
## 1st Qu.: 0.08205 1st Qu.: 0.00 1st Qu.: 5.19 1st Qu.:0.00000
## Median : 0.25651 Median : 0.00 Median : 9.69 Median :0.00000
## Mean : 3.61352 Mean : 11.36 Mean :11.14 Mean :0.06917
## 3rd Qu.: 3.67708 3rd Qu.: 12.50 3rd Qu.:18.10 3rd Qu.:0.00000
## Max. :88.97620 Max. :100.00 Max. :27.74 Max. :1.00000
## nox rm age dis
## Min. :0.3850 Min. :3.561 Min. : 2.90 Min. : 1.130
## 1st Qu.:0.4490 1st Qu.:5.886 1st Qu.: 45.02 1st Qu.: 2.100
## Median :0.5380 Median :6.208 Median : 77.50 Median : 3.207
## Mean :0.5547 Mean :6.285 Mean : 68.57 Mean : 3.795
## 3rd Qu.:0.6240 3rd Qu.:6.623 3rd Qu.: 94.08 3rd Qu.: 5.188
## Max. :0.8710 Max. :8.780 Max. :100.00 Max. :12.127
## rad tax ptratio black
## Min. : 1.000 Min. :187.0 Min. :12.60 Min. : 0.32
## 1st Qu.: 4.000 1st Qu.:279.0 1st Qu.:17.40 1st Qu.:375.38
## Median : 5.000 Median :330.0 Median :19.05 Median :391.44
## Mean : 9.549 Mean :408.2 Mean :18.46 Mean :356.67
## 3rd Qu.:24.000 3rd Qu.:666.0 3rd Qu.:20.20 3rd Qu.:396.23
## Max. :24.000 Max. :711.0 Max. :22.00 Max. :396.90
## lstat medv
## Min. : 1.73 Min. : 5.00
## 1st Qu.: 6.95 1st Qu.:17.02
## Median :11.36 Median :21.20
## Mean :12.65 Mean :22.53
## 3rd Qu.:16.95 3rd Qu.:25.00
## Max. :37.97 Max. :50.00
# plot matrix of the variables
pairs(Boston)
# histograms
Boston %>%
gather() %>%
ggplot(aes(x=value)) + geom_histogram(binwidth = 1) + facet_wrap('key', scales='free')
When we look at the coloured correlation matrix below, we can see the correlations between the variables more clearly. Big circles show a strong correlation, whereas small circles show a weak or no correlation, which is also indicated by a fainter colour. Blue and red circles indicate positive and negative correlations, respectively. A very strong positive correlation can, e.g., be seen between rad and tax. A strong negative correlation can, e.g., be seen between age and dis (weighted mean of distances to five Boston employment centers).
# access all packages needed in this chunk
library(corrplot)
library(dplyr)
# calculate the correlation matrix and round it
cor_matrix <- cor(Boston) %>%
round(digits = 2)
# visualise the correlation matrix
corrplot(cor_matrix, method="circle", type = "upper", cl.pos = "b", tl.pos = "d", tl.cex = 0.6)
To perform a linear discriminant analysis, it is necessary to scale the data. For this, we subtract the column means from the corresponding columns and divide the difference by the standard deviation:
\[scaled(x) = \frac{x - mean(x)}{ sd(x)}\]
When we look at the scaled data, we can see in the summary that the mean of every variable equals 0. Similarly, the standard deviation of every variable is 1 (checked below for zn and age).
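A quick sketch to verify the scaling formula by hand for one column (zn), assuming the unscaled Boston data is still loaded:

# manual scaling of zn should match what scale() produces
manual_zn <- (Boston$zn - mean(Boston$zn)) / sd(Boston$zn)
all.equal(manual_zn, as.numeric(scale(Boston)[, "zn"]))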
# center and standardize variables
boston_scaled <- Boston %>%
scale()
# change the object to data frame
boston_scaled <- as.data.frame(boston_scaled)
# change crim to numeric
boston_scaled$crim <- as.numeric(boston_scaled$crim)
# summaries of the scaled variables
summary(boston_scaled)
## crim zn indus chas
## Min. :-0.419367 Min. :-0.48724 Min. :-1.5563 Min. :-0.2723
## 1st Qu.:-0.410563 1st Qu.:-0.48724 1st Qu.:-0.8668 1st Qu.:-0.2723
## Median :-0.390280 Median :-0.48724 Median :-0.2109 Median :-0.2723
## Mean : 0.000000 Mean : 0.00000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.007389 3rd Qu.: 0.04872 3rd Qu.: 1.0150 3rd Qu.:-0.2723
## Max. : 9.924110 Max. : 3.80047 Max. : 2.4202 Max. : 3.6648
## nox rm age dis
## Min. :-1.4644 Min. :-3.8764 Min. :-2.3331 Min. :-1.2658
## 1st Qu.:-0.9121 1st Qu.:-0.5681 1st Qu.:-0.8366 1st Qu.:-0.8049
## Median :-0.1441 Median :-0.1084 Median : 0.3171 Median :-0.2790
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.5981 3rd Qu.: 0.4823 3rd Qu.: 0.9059 3rd Qu.: 0.6617
## Max. : 2.7296 Max. : 3.5515 Max. : 1.1164 Max. : 3.9566
## rad tax ptratio black
## Min. :-0.9819 Min. :-1.3127 Min. :-2.7047 Min. :-3.9033
## 1st Qu.:-0.6373 1st Qu.:-0.7668 1st Qu.:-0.4876 1st Qu.: 0.2049
## Median :-0.5225 Median :-0.4642 Median : 0.2746 Median : 0.3808
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 1.6596 3rd Qu.: 1.5294 3rd Qu.: 0.8058 3rd Qu.: 0.4332
## Max. : 1.6596 Max. : 1.7964 Max. : 1.6372 Max. : 0.4406
## lstat medv
## Min. :-1.5296 Min. :-1.9063
## 1st Qu.:-0.7986 1st Qu.:-0.5989
## Median :-0.1811 Median :-0.1449
## Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.6024 3rd Qu.: 0.2683
## Max. : 3.5453 Max. : 2.9865
sd(boston_scaled$zn)
## [1] 1
sd(boston_scaled$age)
## [1] 1
# create a quantile vector of crim
bins <- quantile(boston_scaled$crim)
# create a categorical variable 'crime'
labels <- c("low", "med_low", "med_high", "high")
crime <- cut(boston_scaled$crim, breaks = bins, include.lowest = TRUE, labels = labels)
# look at the table of the new factor crime
table(crime)
## crime
## low med_low med_high high
## 127 126 126 127
# remove original crim from the dataset
boston_scaled <- dplyr::select(boston_scaled, -crim)
# add the new categorical value to scaled data
boston_scaled <- data.frame(boston_scaled, crime)
# divide data set in test and train
# number of rows in the Boston dataset
n <- nrow(boston_scaled)
# choose randomly 80% of the rows
ind <- sample(n, size = n * 0.8)
# create train set
train <- boston_scaled[ind,]
# create test set
test <- boston_scaled[-ind,]
In order to later predict what might happen in Boston’s suburbs, we need to know how well the model we use generalises. For this, we split the data set into a training set (80% of the data) and a test set (20% of the data): we train the model on the training set and then make predictions on the test set.
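One practical note: the sample() call above is not seeded, so the exact split, and hence the numbers reported below, will vary between knits. For a reproducible split, a seed could be set before sampling, for example (the seed value is arbitrary):

# placed before the sample() call above
set.seed(2023)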
Linear discriminant analysis is a statistical method that finds linear combinations of the explanatory variables that separate the target classes as well as possible. It weighs the explanatory variables (predictors) and combines them into so-called linear discriminant functions (LD1, LD2, LD3, see below) that make the differences between the groups as large as possible.
From the summary below, we can see that, based on the training data, about 24% of the observations belong to the low group, 25% to med_low, 25% to med_high and 27% to high (“Prior probabilities of groups”). The proportion of trace shows how the between-class variance is distributed over the linear discriminant functions: 95.5% of it is captured by the first linear discriminant function (LD1). The coefficients of linear discriminants indicate that rad (index of accessibility to radial highways) dominates LD1 (coefficient 3.39), while all other variables have coefficients close to 0 by comparison.
# crime = target variable, . = all other (explanatory) variables
lda.fit <- lda(crime ~ ., data = train)
# print the lda.fit object
lda.fit
## Call:
## lda(crime ~ ., data = train)
##
## Prior probabilities of groups:
## low med_low med_high high
## 0.2351485 0.2475248 0.2450495 0.2722772
##
## Group means:
## zn indus chas nox rm age
## low 0.9290593 -0.8694412 -0.106556426 -0.8452771 0.4257276 -0.8243451
## med_low -0.1182824 -0.2789913 0.003267949 -0.5879136 -0.1167839 -0.4048797
## med_high -0.3737674 0.1435402 0.125357825 0.3677850 0.0794748 0.4328663
## high -0.4872402 1.0169558 -0.093369966 1.0498174 -0.4015220 0.8029263
## dis rad tax ptratio black lstat
## low 0.7953199 -0.6917322 -0.7596346 -0.48463936 0.3838651 -0.75387732
## med_low 0.3700278 -0.5466022 -0.4918629 -0.07784658 0.3628689 -0.17548697
## med_high -0.3668996 -0.4412797 -0.3527969 -0.27480023 0.1348722 -0.05706542
## high -0.8423269 1.6397657 1.5152267 0.78268316 -0.7778077 0.87403996
## medv
## low 0.52920937
## med_low 0.01382975
## med_high 0.13185109
## high -0.68302992
##
## Coefficients of linear discriminants:
## LD1 LD2 LD3
## zn 0.07546570 0.86332960 -1.18275029
## indus 0.05490067 -0.08933777 0.15560210
## chas -0.12919427 -0.02855052 0.12402187
## nox 0.30858020 -0.65856823 -1.21637015
## rm -0.10446925 -0.08814759 -0.03363678
## age 0.24965607 -0.38786012 -0.33224179
## dis -0.09088826 -0.24029972 0.33830435
## rad 3.39076171 1.01745769 -0.23715643
## tax 0.03018729 -0.24031926 0.88407995
## ptratio 0.09381865 0.13706298 -0.34424564
## black -0.09635558 0.00261990 0.16256367
## lstat 0.20138019 -0.13082997 0.50793898
## medv 0.15294946 -0.21126281 -0.13786459
##
## Proportion of trace:
## LD1 LD2 LD3
## 0.9551 0.0312 0.0138
# load function for lda biplot arrows
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "red", tex = 0.75, choices = c(1,2)){
heads <- coef(x)
graphics::arrows(x0 = 0, y0 = 0,
x1 = myscale * heads[,choices[1]],
y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
text(myscale * heads[,choices], labels = row.names(heads),
cex = tex, col=color, pos=3)
}
# target classes as numeric
classes <- as.numeric(train$crime)
# plot the lda results (select both lines and execute them at the same time!)
plot(lda.fit, dimen = 2, col = classes, pch = classes)
lda.arrows(lda.fit, myscale = 1)
This is confirmed by the plot, where rad appears to be the only variable strongly influencing LD1. We can also see that the groups of observations separate mainly along LD1 (x-axis), with the high group clustered at one end. LD2 (y-axis) shows little discriminative power, i.e., it does not capture the differences between the groups well.
After training the model, we can now predict classes for the test data with the LDA model. Looking at the class-wise accuracy, the predictions are best for high, followed by med_high, low, and med_low (100%, 67%, 53%, and 50%, respectively). This is also evident in the cross tabulation: all high observations were predicted correctly.
# save the correct classes from test data
correct_classes <- test$crime
# remove the crime variable from test data
test <- dplyr::select(test, -crime)
# predict classes with test data
lda.pred <- predict(lda.fit, newdata = test)
# cross tabulate the results
conf <- table(correct = correct_classes, predicted = lda.pred$class)
conf
## predicted
## correct low med_low med_high high
## low 17 14 1 0
## med_low 5 13 8 0
## med_high 0 7 18 2
## high 0 0 0 17
# class-wise prediction accuracy (share of correct predictions per true class)
diag(conf) / rowSums(conf)
## low med_low med_high high
## 0.5312500 0.5000000 0.6666667 1.0000000
To state whether objects are similar to one another or not, we can also measure distances between them. The most common distance measure is the Euclidean distance, which is the length of the straight line connecting two points in the variable space.
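For two observations x and y described by p variables, the Euclidean distance is:

\[d(x, y) = \sqrt{\sum_{i=1}^{p} (x_i - y_i)^2}\]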
K-means clustering is a commonly used clustering method that assigns observations to groups (a.k.a. clusters) based on how similar they are.
# reload Boston data set
library(MASS)
data("Boston")
# standardise data set
boston_scaled <- as.data.frame(scale(Boston))
boston_scaled$crim <- as.numeric(boston_scaled$crim)
# euclidean distance matrix
dist_eu <- dist(boston_scaled)
# look at the summary of the distances
summary(dist_eu)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.1343 3.4625 4.8241 4.9111 6.1863 14.3970
# k-means clustering with 2 clusters
km <- kmeans(boston_scaled, centers = 2)
pairs(boston_scaled, col = km$cluster)
# k-means clustering with 3 clusters
km <- kmeans(boston_scaled, centers = 3)
pairs(boston_scaled, col = km$cluster)
# k-means clustering with 4 clusters
km <- kmeans(boston_scaled, centers = 4)
pairs(boston_scaled, col = km$cluster)
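The numbers of clusters above (2, 3, and 4) were chosen by hand. As a hedged sketch, one common heuristic for choosing k is to compute the total within-cluster sum of squares (WCSS) for a range of k and pick the value where the decrease levels off (the “elbow”):

# total within-cluster sum of squares for k = 1..10
set.seed(123)  # k-means uses random starting centers; the seed value is arbitrary
twcss <- sapply(1:10, function(k) kmeans(boston_scaled, centers = k)$tot.withinss)
plot(1:10, twcss, type = "b", xlab = "number of clusters k", ylab = "total WCSS")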